Opponent Modeling in Deep Reinforcement Learning
Authors
He He, Jordan Boyd-Graber, Kevin Kwok, Hal Daumé III
Abstract
Opponent modeling is necessary in multi-agent settings where secondary agents with competing goals also adapt their strategies, yet it remains challenging because strategies interact with each other and change. Most previous work focuses on developing probabilistic models or parameterized strategies for specific applications. Inspired by the recent success of deep reinforcement learning, we present neural-based models that jointly learn a policy and the behavior of opponents. Instead of explicitly predicting the opponent’s action, we encode observations of the opponent into a deep Q-Network (DQN); however, we retain explicit modeling (if desired) using multitasking. By using a Mixture-of-Experts architecture, our model automatically discovers different strategy patterns of opponents without extra supervision. We evaluate our models on a simulated soccer game and a popular trivia game, showing superior performance over DQN and its variants.
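The abstract's Mixture-of-Experts idea can be illustrated with a short sketch: features describing the opponent's observed behavior drive a gating network over several expert Q-heads, so different opponent strategies can be captured without explicit supervision. This is a minimal sketch assuming PyTorch; the class name, layer sizes, and number of experts are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn as nn

class MixtureOfExpertsDQN(nn.Module):
    """Sketch of a DQN whose Q-values are a gated mixture of expert heads.

    The opponent's observed behavior is encoded separately and used only
    to compute gating weights over K expert Q-heads, so distinct opponent
    strategies can select distinct experts without extra labels. All
    sizes below are illustrative assumptions, not the paper's values.
    """

    def __init__(self, state_dim, opp_dim, n_actions, n_experts=4, hidden=64):
        super().__init__()
        self.state_enc = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.opp_enc = nn.Sequential(nn.Linear(opp_dim, hidden), nn.ReLU())
        # One Q-head per expert, each mapping the state encoding to Q-values.
        self.experts = nn.ModuleList(
            [nn.Linear(hidden, n_actions) for _ in range(n_experts)]
        )
        # Gating network: opponent encoding -> mixture weights over experts.
        self.gate = nn.Linear(hidden, n_experts)

    def forward(self, state, opp_features):
        h_s = self.state_enc(state)
        h_o = self.opp_enc(opp_features)
        w = torch.softmax(self.gate(h_o), dim=-1)           # (batch, K)
        q_per_expert = torch.stack(
            [head(h_s) for head in self.experts], dim=1
        )                                                   # (batch, K, A)
        # Q(s, a) = sum_k w_k * Q_k(s, a)
        return (w.unsqueeze(-1) * q_per_expert).sum(dim=1)  # (batch, A)
```

Trained with the standard DQN temporal-difference loss, a network like this lets expert specialization emerge purely from the gating signal, since the gate never sees opponent action labels.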
Similar Papers
On the usefulness of opponent modeling: the Kuhn Poker case study
The application of reinforcement learning algorithms to Partially Observable Stochastic Games (POSG) is challenging since each agent does not have access to the full state information and, in the case of concurrent learners, the environment has non-stationary dynamics. These problems could be partially overcome if the policies followed by the other agents were known, and, for this reason, many app...
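To make the "known opponent policies" observation concrete: if the other agent followed a known stationary policy, it could be marginalized out of the joint dynamics, leaving an ordinary single-agent problem with stationary transitions. The notation below is an assumed reconstruction for illustration, not taken from the paper:

```latex
% Assumed notation: T is the joint-action transition kernel, R the reward,
% a the learner's action, a_o the opponent's action, \pi_o its known policy.
\[
  T_{\pi_o}(s' \mid s, a) = \sum_{a_o} \pi_o(a_o \mid s)\, T(s' \mid s, a, a_o),
  \qquad
  R_{\pi_o}(s, a) = \sum_{a_o} \pi_o(a_o \mid s)\, R(s, a, a_o).
\]
```

This is the sense in which knowing (or modeling) the opponent's policy restores the stationarity that standard reinforcement learning relies on.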
On the Usefulness of Opponent Modeling: the Kuhn Poker case study (Short Paper)
A Reinforcement Learning Agent for 1-Card Poker
Modeling and reasoning about an opponent in a competitive environment is a difficult task. This paper uses a reinforcement learning framework to build an adaptable agent for the game of 1-card poker. The resulting agent is evaluated against various opponents and is shown to be very competitive.
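As a rough sketch of the kind of agent the blurb describes, the loop below runs tabular Q-learning against an opponent folded into the environment. The environment interface (reset, step, n_actions) and all hyperparameters are assumptions for illustration, not details from the paper.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=50_000, alpha=0.1, gamma=1.0, eps=0.1):
    """Tabular Q-learning against a fixed opponent hidden inside `env`.

    Assumed interface: env.reset() -> state, env.step(action) ->
    (next_state, reward, done), env.n_actions -> int. The opponent's
    stationary, stochastic play is part of the transition dynamics.
    """
    Q = defaultdict(lambda: [0.0] * env.n_actions)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy exploration over the learned Q-table.
            if random.random() < eps:
                a = random.randrange(env.n_actions)
            else:
                a = max(range(env.n_actions), key=lambda i: Q[s][i])
            s2, r, done = env.step(a)
            # Standard one-step TD target; terminal states bootstrap nothing.
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```

For a game as small as 1-card poker, states can simply be (card, betting history) tuples, so a table suffices and no function approximation is needed.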
Combining Opponent Modeling and Model-Based Reinforcement Learning in a Two-Player Competitive Game
When an opponent with a stationary and stochastic policy is encountered in a two-player competitive game, model-free Reinforcement Learning (RL) techniques such as Q-learning and Sarsa(λ) can be used to learn near-optimal counter strategies given enough time. When an agent has learned such counter strategies against multiple diverse opponents, it is not trivial to decide which one to use when a ...
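The selection problem this abstract raises can be sketched directly: score each known opponent model by the likelihood of the opponent's observed actions, then play the counter-strategy learned against the most probable model. The container and method names below (opponent_models, prob, counter_strategies) are hypothetical, introduced only for the sketch.

```python
import math

def select_counter_strategy(history, opponent_models, counter_strategies,
                            prior=None):
    """Pick a counter-strategy by Bayesian identification of the opponent.

    Assumed interfaces: `opponent_models` maps a name to a model whose
    prob(state, action) returns the probability that this opponent type
    plays `action` in `state`; `counter_strategies` maps the same names
    to policies learned in advance against each type.
    """
    names = list(opponent_models)
    log_post = {n: (math.log(prior[n]) if prior else 0.0) for n in names}
    for state, action in history:  # the opponent's observed decisions
        for n in names:
            p = max(opponent_models[n].prob(state, action), 1e-12)
            log_post[n] += math.log(p)  # accumulate log-likelihood
    best = max(names, key=log_post.get)
    return counter_strategies[best]
```

Working in log-probabilities keeps the update numerically stable, and supplying a prior lets the agent default to a safe counter-strategy before it has seen enough of the opponent's play.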
Learning Against Non-Stationary Agents with Opponent Modelling & Deep Reinforcement Learning
Humans, like all animals, both cooperate and compete with each other. Through these interactions we learn to observe, act, and manipulate to maximise our utility function, and continue doing so as others learn with us. This is a decentralised non-stationary learning problem, where to survive and flourish an agent must adapt to the gradual changes of other agents as they learn, as well as capita...
Publication year: 2016